Geographical Information Alignment Boosts Traffic Analysis via Transpose Cross-attention
Xiangyu Jiang, Xiwen Chen, Hao Wang, Abolfazl Razi
Traffic accident prediction is crucial for enhancing road safety and mitigating congestion, and recent Graph Neural Networks (GNNs) have shown promise in modeling inherently graph-structured traffic data. However, existing GNN-based approaches often overlook or fail to explicitly exploit geographic position information, which frequently plays a critical role in understanding spatial dependencies. This aligns with our observation that accident locations are often highly correlated. To address this issue, we propose a plug-and-play module for common GNN frameworks, termed Geographic Information Alignment (GIA), which efficiently fuses node features with geographic position information through a novel Transpose Cross-attention mechanism. Because traffic graphs contain a large number of nodes, conventional cross-attention performing node-wise alignment may be infeasible on computation-limited resources. Instead, we transpose the Query, Key, and Value in the cross-attention mechanism, which substantially reduces the computational cost while retaining sufficient information. Experimental results for both traffic occurrence prediction and severity prediction (with severity levels based on intervals of recorded crash counts) on large-scale city-wise datasets confirm the effectiveness of our method, which obtains gains ranging from 1.3% to 10.9% in F1 score and 0.3% to 4.8% in AUC.
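The key efficiency idea can be illustrated with a minimal NumPy sketch. This is a hypothetical reconstruction, not the authors' implementation: where standard cross-attention forms an N×N attention map over nodes, the transposed variant contracts over the node dimension first, yielding a d×d map over feature channels, so the cost no longer grows quadratically with the number of nodes N.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transpose_cross_attention(x, pos, Wq, Wk, Wv):
    """Sketch of transpose cross-attention (assumed form).

    x:   (N, f) node features
    pos: (N, p) geographic positions
    Queries come from positions; keys/values from node features.
    The attention map is (d, d) over channels rather than (N, N)
    over nodes, avoiding quadratic cost in N.
    """
    Q, K, V = pos @ Wq, x @ Wk, x @ Wv      # each (N, d)
    n = x.shape[0]
    attn = softmax(Q.T @ K / np.sqrt(n))    # (d, d) channel-mixing map
    return V @ attn                          # (N, d) position-informed embeddings

# Toy usage: 1000 nodes, 16-dim features, 2-D coordinates (all names hypothetical)
rng = np.random.default_rng(0)
d = 32
x, pos = rng.normal(size=(1000, 16)), rng.normal(size=(1000, 2))
Wq = rng.normal(size=(2, d))
Wk, Wv = rng.normal(size=(16, d)), rng.normal(size=(16, d))
out = transpose_cross_attention(x, pos, Wq, Wk, Wv)
print(out.shape)  # (1000, 32)
```

Note the design choice this highlights: the N×N softmax of ordinary cross-attention is replaced by a d×d one, so memory and compute scale linearly in N for fixed hidden width d.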
Researchers Develop A Unified Framework For Evaluating Natural Language Generation (NLG)
Natural language generation (NLG) is a broad term that encompasses a variety of tasks that generate fluent text from input data and other contextual information. In practice, the goals of these tasks are often very different. Well-known examples of NLG include compressing a source article into a brief paragraph conveying the most significant information (summarization), converting content presented in one language into another (translation), and creating novel responses to drive a conversation (dialogue). Natural language processing has advanced at a breakneck pace in developing and improving models for these tasks. However, evaluating NLG remains difficult: human judgment is considered the gold standard, but it is typically costly and time-consuming to obtain.
Compression, transduction, and creation: a unified framework for evaluating natural language generation
Figure 1: Our framework classifies language generation tasks into compression, transduction, and creation (left), and unifies the evaluation (middle) of key quality aspects with the common operation of information alignment (right).

TL;DR: Evaluating natural language generation (NLG) is hard. Our general framework eases the difficulty by unifying evaluation around a single central operation: information alignment. The resulting metrics achieve state-of-the-art correlations with human judgments on diverse NLG tasks, and are available as a library on PyPI and GitHub.
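To make the central operation concrete, here is a minimal, hypothetical sketch of information alignment as greedy embedding matching (in the spirit of BERTScore-style metrics, not the paper's exact formulation): each generated-token embedding is aligned to its most similar source token, and the average cosine similarity serves as the alignment score. All names and the toy embeddings are illustrative assumptions.

```python
import numpy as np

def alignment_score(gen_emb, src_emb):
    """Greedy token-level information alignment (illustrative sketch).

    gen_emb: (n_gen, d) embeddings of generated tokens
    src_emb: (n_src, d) embeddings of source tokens
    Returns the mean of each generated token's best cosine
    similarity against the source tokens.
    """
    gen = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    sim = gen @ src.T              # (n_gen, n_src) cosine similarities
    return sim.max(axis=1).mean()  # align each token to its best source match

# Toy check: text grounded in the source aligns better than unrelated text
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 8))                        # source-token embeddings
faithful = src[:5] + 0.01 * rng.normal(size=(5, 8))   # near-copies of source tokens
hallucinated = rng.normal(size=(5, 8))                # unrelated tokens
print(alignment_score(faithful, src) > alignment_score(hallucinated, src))  # True
```

For a compression task (summarization) such a score acts as a consistency check of the summary against the source; for creation tasks the alignment direction and reference set would differ, which is exactly the per-task specialization the unified framework organizes.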